Probabilistic classification

In machine learning, a probabilistic classifier is a classifier that can predict, given an input sample, a probability distribution over a set of classes, rather than only outputting the single most likely class for that sample. Probabilistic classifiers provide classification with a degree of certainty, which can be useful in its own right, or when combining classifiers into ensembles.
Formally, an "ordinary" classifier is some rule, or function, that assigns to a sample x a class label \hat{y}:
:\hat{y} = f(x)
The samples come from some set X (e.g., the set of all documents, or the set of all images), while the class labels form a finite set Y defined prior to training.
Probabilistic classifiers generalize this notion of classifiers: instead of functions, they are conditional distributions \Pr(Y \vert X), meaning that for a given x \in X, they assign probabilities to all y \in Y (and these probabilities sum to one). "Hard" classification can then be done using the optimal decision rule
:\hat{y} = \operatorname{arg\,max}_{y} \Pr(Y=y \vert X)
or, in English, the predicted class is that which has the highest probability.
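As a minimal sketch in plain Python (the class labels and probabilities below are invented for illustration), the hard decision rule is just an argmax over the predicted distribution:

 # Hypothetical predicted distribution Pr(Y = y | X = x) for one sample;
 # by definition the probabilities sum to one.
 probs = {"spam": 0.7, "ham": 0.2, "newsletter": 0.1}
 # Hard classification: pick the class with the highest probability,
 # i.e. y_hat = argmax_y Pr(Y = y | X = x).
 y_hat = max(probs, key=probs.get)
 print(y_hat)  # -> spam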
Binary probabilistic classifiers are also called binomial regression models in statistics. In econometrics, probabilistic classification in general is called discrete choice.
Some classification models, such as naive Bayes, logistic regression and multilayer perceptrons (when trained under an appropriate loss function) are naturally probabilistic. Other models such as support vector machines are not, but methods exist to turn them into probabilistic classifiers.
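As one concrete illustration, scikit-learn (assumed available here) can wrap an SVM, which natively outputs only decision scores, in Platt-style sigmoid calibration to obtain class probabilities. A minimal sketch on made-up toy data:

 # Sketch: turn a non-probabilistic SVM into a probabilistic classifier
 # via Platt scaling (sigmoid calibration) with scikit-learn.
 from sklearn.calibration import CalibratedClassifierCV
 from sklearn.datasets import make_classification
 from sklearn.svm import LinearSVC

 X, y = make_classification(n_samples=500, random_state=0)  # toy data
 svm = LinearSVC(max_iter=10000)  # outputs decision scores, not probabilities
 clf = CalibratedClassifierCV(svm, method="sigmoid", cv=5)
 clf.fit(X, y)
 print(clf.predict_proba(X[:3]))  # each row is a distribution summing to one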
==Generative and conditional training==
Some models, such as logistic regression, are conditionally trained: they optimize the conditional probability \Pr(Y \vert X) directly on a training set (see empirical risk minimization). Other classifiers, such as naive Bayes, are trained generatively: at training time, the class-conditional distribution \Pr(X \vert Y) and the class prior \Pr(Y) are found, and the conditional distribution \Pr(Y \vert X) is derived using Bayes' rule.
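To make the contrast concrete, here is a minimal sketch of the generative route in plain Python, on a made-up dataset with one binary feature: estimate the class prior \Pr(Y) and the class-conditional \Pr(X \vert Y) by counting, then derive \Pr(Y \vert X) with Bayes' rule:

 # Generative training on a toy dataset of (feature, label) pairs.
 data = [(1, "pos"), (1, "pos"), (0, "pos"), (1, "neg"), (0, "neg"), (0, "neg")]
 labels = {y for _, y in data}

 # Class prior Pr(Y = y), estimated from label frequencies.
 prior = {y: sum(1 for _, c in data if c == y) / len(data) for y in labels}

 # Class-conditional Pr(X = x | Y = y), estimated by counting within each class.
 def likelihood(x, y):
     in_class = [xi for xi, c in data if c == y]
     return sum(1 for xi in in_class if xi == x) / len(in_class)

 # Bayes' rule: Pr(y | x) = Pr(x | y) Pr(y) / sum over y' of Pr(x | y') Pr(y').
 def posterior(x):
     joint = {y: likelihood(x, y) * prior[y] for y in labels}
     z = sum(joint.values())
     return {y: p / z for y, p in joint.items()}

 print(posterior(1))  # -> e.g. {'pos': 0.666..., 'neg': 0.333...}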

Excerpt source: Wikipedia, the free encyclopedia.
Read the full text of "probabilistic classification" at Wikipedia.


